
    Order theory for discrete gradient methods

    We present a subclass of the discrete gradient methods, which are integrators designed to preserve invariants of ordinary differential equations. From a formal series expansion of the methods, we derive conditions for arbitrarily high order. We devote considerable space to the average vector field discrete gradient, from which we get P-series methods in the general case, and B-series methods for canonical Hamiltonian systems. Higher-order schemes are presented and applied to the Hénon–Heiles system and a Lotka–Volterra system.
    Comment: 45 pages, 5 figures
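    For orientation, the standard setting (not this paper's new order-theory machinery) is as follows: for an ODE \dot{x} = S(x)\nabla H(x) with S(x) skew-symmetric, a discrete gradient is any map \bar{\nabla}H satisfying

        \bar{\nabla}H(x,y)^T (y - x) = H(y) - H(x), \qquad \bar{\nabla}H(x,x) = \nabla H(x),

    and the scheme

        \frac{x_{n+1} - x_n}{h} = \bar{S}(x_n, x_{n+1}, h)\, \bar{\nabla}H(x_n, x_{n+1})

    preserves H exactly for any skew-symmetric approximation \bar{S}. The average vector field (AVF) discrete gradient highlighted in the abstract is

        \bar{\nabla}H(x,y) = \int_0^1 \nabla H\big((1 - \xi)x + \xi y\big)\, d\xi.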

    Adaptive Energy Preserving Methods for Partial Differential Equations

    A method for constructing first-integral-preserving numerical schemes for time-dependent partial differential equations on non-uniform grids is presented. The method can be used with both finite difference and partition of unity approaches, thereby also encompassing finite element approaches. The schemes are then extended to accommodate r-, h- and p-adaptivity. The method is applied to the Korteweg–de Vries equation and the sine-Gordon equation, and results from numerical experiments are presented.
    Comment: 27 pages; some changes to notation and figures
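    As a concrete illustration of a first-integral-preserving time step of the kind the paper builds on, the sketch below implements one AVF discrete gradient step for a semi-discretized system \dot{x} = S \nabla H(x). The function names, the constant skew-symmetric S and the quadrature order are illustrative assumptions; the paper's adaptive-grid machinery is not shown.

        import numpy as np
        from scipy.optimize import fsolve

        def avf_discrete_gradient(grad_H, x, y, n_quad=8):
            # Approximate the AVF integral int_0^1 grad_H((1-xi)*x + xi*y) dxi
            # with Gauss-Legendre quadrature mapped to [0, 1].
            nodes, weights = np.polynomial.legendre.leggauss(n_quad)
            xi, w = 0.5 * (nodes + 1.0), 0.5 * weights
            return sum(wi * grad_H((1.0 - xii) * x + xii * y) for wi, xii in zip(w, xi))

        def avf_step(grad_H, S, x, h):
            # One implicit step x_new = x + h * S @ avf_discrete_gradient(x, x_new),
            # solved with an explicit Euler step as the initial guess.
            residual = lambda y: y - x - h * S @ avf_discrete_gradient(grad_H, x, y)
            return fsolve(residual, x + h * S @ grad_H(x))

    For polynomial Hamiltonians of moderate degree the quadrature is exact, so H is conserved up to the nonlinear-solver tolerance.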

    Shape analysis on Lie groups and homogeneous spaces

    In this paper we are concerned with the approach to shape analysis based on the so-called Square Root Velocity Transform (SRVT). We propose a generalisation of the SRVT from Euclidean spaces to shape spaces of curves on Lie groups and on homogeneous manifolds. The main idea behind our approach is to exploit the geometry of the natural Lie group actions on these spaces.
    Comment: 8 pages; contribution to the conference "Geometric Science of Information '17"
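    For reference, the Euclidean SRVT that the paper generalises sends a curve c to

        q(t) = \frac{\dot{c}(t)}{\sqrt{\|\dot{c}(t)\|}},

    under which a particular elastic metric on curves becomes the flat L^2 metric, so that distances between shapes can be computed cheaply. The generalisation replaces \dot{c} by a derivative transported to the Lie algebra via the group action; the precise construction is in the paper and is not reproduced here.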

    Dissipative numerical schemes on Riemannian manifolds with applications to gradient flows

    This paper concerns an extension of discrete gradient methods to finite-dimensional Riemannian manifolds, termed discrete Riemannian gradients, and their application to dissipative ordinary differential equations. This includes Riemannian gradient flow systems, which occur naturally in optimization problems. The Itoh–Abe discrete gradient is formulated and applied to gradient systems, yielding a derivative-free optimization algorithm. The algorithm is tested on two eigenvalue problems and two problems from manifold-valued imaging: InSAR denoising and DTI denoising.
    Comment: Post-revision version. To appear in SIAM Journal on Scientific Computing
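    To make the derivative-free aspect concrete, here is a minimal Euclidean sketch of an Itoh–Abe discrete gradient step for the gradient flow \dot{x} = -\nabla V(x), using only evaluations of the objective V. The Riemannian ingredients of the paper (retractions, transport) are omitted, and the scalar-solver details are illustrative.

        import numpy as np
        from scipy.optimize import fsolve

        def itoh_abe_step(V, x, h):
            # Update one coordinate at a time: each update solves a scalar equation
            # involving only values of V, so no derivatives of V are required.
            x = np.asarray(x, dtype=float)
            y = x.copy()
            for i in range(len(y)):
                V_prev = V(y)

                def residual(yi):
                    yi = np.atleast_1d(yi)[0]
                    z = y.copy()
                    z[i] = yi
                    # (y_i - x_i)/h = -(V(z) - V_prev)/(y_i - x_i), with the
                    # denominator cleared; y_i = x_i is a trivial root to avoid.
                    return (yi - x[i]) ** 2 + h * (V(z) - V_prev)

                # start away from the trivial root y_i = x_i
                y[i] = fsolve(residual, x[i] - h)[0]
            return y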

    Pseudo-Hamiltonian neural networks for learning partial differential equations

    Pseudo-Hamiltonian neural networks (PHNN) were recently introduced for learning dynamical systems that can be modelled by ordinary differential equations. In this paper, we extend the method to partial differential equations. The resulting model comprises up to three neural networks, modelling terms representing conservation, dissipation and external forces, together with discrete convolution operators that can either be learned or given as input. We demonstrate numerically the superior performance of PHNN compared to a baseline model that models the full dynamics with a single neural network. Moreover, since the PHNN model consists of three parts with different physical interpretations, these can be studied separately to gain insight into the system, and the learned model remains applicable if the external forces are removed or changed.
    Comment: 33 pages, 14 figures; v2: minor changes to text, updated numerical experiments
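    As a rough sketch of the three-part architecture (class and term names are our own illustration, not the authors' code; the pointwise densities and the assumed form u_t = D(δH/δu) - δV/δu + f are simplifications):

        import torch
        import torch.nn as nn

        def mlp(width=32):
            return nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, 1))

        class PseudoHamiltonianPDE(nn.Module):
            # Three separately interpretable parts: a conserved-energy density H,
            # a dissipation density V, and an external force f, plus a learnable
            # convolution D acting as a difference operator on a periodic 1-D grid.
            def __init__(self):
                super().__init__()
                self.H = mlp()
                self.V = mlp()
                self.f = mlp()
                self.D = nn.Conv1d(1, 1, kernel_size=3, padding=1,
                                   bias=False, padding_mode="circular")

            def forward(self, u, x):
                # u, x: tensors of shape (batch, n_grid)
                u = u.requires_grad_(True)
                dH = torch.autograd.grad(self.H(u.unsqueeze(-1)).sum(), u,
                                         create_graph=True)[0]
                dV = torch.autograd.grad(self.V(u.unsqueeze(-1)).sum(), u,
                                         create_graph=True)[0]
                force = self.f(x.unsqueeze(-1)).squeeze(-1)
                return self.D(dH.unsqueeze(1)).squeeze(1) - dV + force

    Because the parts are separate modules, the learned force can be dropped or swapped out after training, which is the modularity the abstract refers to.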

    Learning Dynamical Systems from Noisy Data with Inverse-Explicit Integrators

    We introduce the mean inverse integrator (MII), a novel approach to increasing accuracy when training neural networks to approximate vector fields of dynamical systems from noisy data. The method can be used to average multiple trajectories obtained by numerical integrators such as Runge-Kutta methods. We show that the class of mono-implicit Runge-Kutta (MIRK) methods has particular advantages when used in connection with MII. When training vector field approximations, inserting the training data into the MIRK formulae yields explicit expressions for the loss functions, unlocking symmetric and high-order integrators that would otherwise be implicit for initial value problems. The combined approach of applying MIRK within MII yields a significantly lower error than plain use of the numerical integrator without averaging the trajectories. This is demonstrated with experiments using data from several (chaotic) Hamiltonian systems. Additionally, we perform a sensitivity analysis of the loss functions under normally distributed perturbations, supporting the favourable performance of MII.
    Comment: 23 pages, 10 figures
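    The implicit midpoint rule is the simplest member of the MIRK class and shows why the loss becomes explicit: its only stage is built from the two endpoint values, both of which are known from data. A minimal sketch (names illustrative):

        import torch

        def midpoint_mirk_loss(f_theta, y0, y1, h):
            # Implicit midpoint: y1 = y0 + h * f((y0 + y1)/2). With (y0, y1) taken
            # from trajectory data the stage value is known, so the residual is an
            # explicit function of the network f_theta and no nonlinear solve is
            # needed, despite the method being implicit as an integrator.
            residual = y1 - y0 - h * f_theta((y0 + y1) / 2)
            return residual.pow(2).mean()

    The same trick applies to higher-order symmetric MIRK schemes, which is what makes them attractive inside MII.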

    Port-Hamiltonian Neural Networks with State-Dependent Ports

    Hybrid machine learning based on Hamiltonian formulations has recently been demonstrated successfully for simple mechanical systems, both energy-conserving and not. We show that port-Hamiltonian neural network models can be used to learn external forces acting on a system. We argue that this property is particularly useful when the external forces are state-dependent, in which case it is the port-Hamiltonian structure that facilitates the separation of internal and external forces. Numerical results are provided for a forced and damped mass-spring system and a more complex tank system, and a symmetric fourth-order integration scheme is introduced for improved training on sparse and noisy data.
    Comment: 21 pages, 12 figures; v3: restructured the paper for more clarity, major changes to the text, updated plots
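    Schematically, the port-Hamiltonian form underlying such models is

        \dot{x} = \big(S(x) - R(x)\big)\, \nabla H(x) + F(x, t),

    with S skew-symmetric (internal, energy-conserving dynamics), R positive semi-definite (dissipation) and F the external port. Because F enters the model as a separate term, a state-dependent external force can be learned and then separated from the internal dynamics.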

    Order theory for discrete gradient methods

    The discrete gradient methods are integrators designed to preserve invariants of ordinary differential equations. From a formal series expansion of a subclass of these methods, we derive conditions for arbitrarily high order. We derive specific results for the average vector field discrete gradient, from which we get P-series methods in the general case, and B-series methods for canonical Hamiltonian systems. Higher-order schemes are presented, and their application is demonstrated on the Hénon–Heiles system and a Lotka–Volterra system, as well as on both the training and integration of a pendulum system learned from data by a neural network.
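    For comparison with the AVF gradient above, another classical member of the class is the midpoint discrete gradient of Gonzalez,

        \bar{\nabla}H(x,y) = \nabla H\Big(\frac{x+y}{2}\Big) + \frac{H(y) - H(x) - \nabla H\big(\frac{x+y}{2}\big)^T (y - x)}{\|y - x\|^2}\,(y - x),

    which, like the AVF method, yields integrators of order two in general; the order conditions derived in the paper are what allow schemes beyond this baseline.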